Background/Objectives: Biosimilar studies use the overall response rate to assess clinical similarity. Sample size and power depend on the equivalence margin, which is defined on either the risk difference or the risk ratio scale. This manuscript investigates how the choice of evaluation metric and variation in response rates affect study power. Methods: Two numerical simulations are conducted. The first tests equivalence on the risk difference scale, while the second tests on the risk ratio scale. Both simulations assume no true difference between the biosimilar and the reference product. Response rates vary from 0.1 to 0.9, and each scenario is repeated 10,000 times. Results: The study shows inconsistent conclusions when testing the equivalence of the overall response rate on the risk difference and risk ratio scales, even when the hypotheses are mathematically equivalent. Consequently, a study is often underpowered when testing on both scales. Additionally, study power is sensitive to deviations in the outcome response rate, and the direction of the change differs between the two evaluation metrics. Conclusions: Biosimilar study designs should not convert equivalence margins between the risk difference and risk ratio scales under the assumption that study power is unchanged. Careful strategies should be planned for estimating overall response rates when assessing sample size.
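To illustrate the kind of simulation the abstract describes, the sketch below estimates power for two-one-sided-test (TOST) equivalence assessments of the response rate on both scales, assuming no true difference between arms. The specific settings (Wald-type test statistics, a risk difference margin of ±0.15, risk ratio margins of 0.80 to 1.25, 100 subjects per arm, one-sided alpha of 0.05, and a 0.5 continuity shift) are illustrative assumptions, not the values used in the manuscript.

```python
# Illustrative Monte Carlo sketch (not the authors' code): power of TOST
# equivalence tests for the overall response rate on the risk difference
# and risk ratio scales. Margins, sample size, and alpha are assumptions.
import numpy as np
from scipy.stats import norm


def simulate_power(p, n=100, n_sim=10_000, alpha=0.05,
                   rd_margin=0.15, rr_margins=(0.80, 1.25), seed=2024):
    """Return (power on RD scale, power on RR scale) when both arms share
    the same true response rate p (i.e., no true difference)."""
    rng = np.random.default_rng(seed)
    z_crit = norm.ppf(1 - alpha)           # one-sided critical value for TOST
    x_t = rng.binomial(n, p, n_sim)        # responders, biosimilar arm
    x_r = rng.binomial(n, p, n_sim)        # responders, reference arm
    # 0.5 continuity shift keeps estimated rates away from 0 and 1,
    # which would break the log risk ratio computation.
    p_t = (x_t + 0.5) / (n + 1)
    p_r = (x_r + 0.5) / (n + 1)

    # Risk difference scale: TOST with a Wald standard error.
    rd = p_t - p_r
    se_rd = np.sqrt(p_t * (1 - p_t) / n + p_r * (1 - p_r) / n)
    pass_rd = (((rd + rd_margin) / se_rd > z_crit)
               & ((rd - rd_margin) / se_rd < -z_crit))

    # Risk ratio scale: TOST on the log risk ratio.
    log_rr = np.log(p_t / p_r)
    se_log_rr = np.sqrt((1 - p_t) / (n * p_t) + (1 - p_r) / (n * p_r))
    lo, hi = np.log(rr_margins[0]), np.log(rr_margins[1])
    pass_rr = (((log_rr - lo) / se_log_rr > z_crit)
               & ((log_rr - hi) / se_log_rr < -z_crit))

    return pass_rd.mean(), pass_rr.mean()


if __name__ == "__main__":
    # Sweep the true response rate from 0.1 to 0.9, as in the abstract.
    for p in np.arange(0.1, 1.0, 0.1):
        pow_rd, pow_rr = simulate_power(p)
        print(f"response rate {p:.1f}: "
              f"power RD = {pow_rd:.3f}, power RR = {pow_rr:.3f}")
```

Under these assumed settings, a fixed risk difference margin is easier to meet at low response rates while a fixed risk ratio margin is easier to meet at high response rates, which is one simple way the two scales can move in opposite directions as the response rate shifts.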